A simple one sweep algorithm for optimal APP symbol decoding of linear block codes
Soft-input/soft-output symbol decoding plays a significant role in iterative decoding. We propose a simple optimal soft-input/soft-output symbol decoding algorithm for linear block codes which requires only one forward recursion over a trellis. For many codes the decoding complexity is lower than that of previous methods, such as the algorithm by Bahl et al. (1974), and the reduction is largest when decoding Hamming codes.
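The paper's single forward-sweep trellis recursion is not reproduced here; as a point of reference, the sketch below computes the same optimal symbol-wise a-posteriori probabilities (APPs) by brute force over the codebook of the (7,4) Hamming code, assuming BPSK signalling over an AWGN channel. The generator matrix, channel model, and all parameter choices are illustrative assumptions, not taken from the paper.

```python
import itertools
import numpy as np

# Generator matrix of the (7,4) Hamming code in systematic form (illustrative).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Enumerate all 2^4 codewords; a one-sweep trellis decoder avoids this enumeration.
codebook = np.array([(np.array(m) @ G) % 2
                     for m in itertools.product([0, 1], repeat=4)])

def app_llrs(y, noise_var):
    """Exact symbol-wise APP log-likelihood ratios for BPSK over AWGN.

    y         : received vector (length 7), BPSK mapping 0 -> +1, 1 -> -1
    noise_var : AWGN noise variance per dimension
    """
    x = 1.0 - 2.0 * codebook                            # BPSK symbols per codeword
    log_metrics = -np.sum((y - x) ** 2, axis=1) / (2.0 * noise_var)
    metrics = np.exp(log_metrics - log_metrics.max())   # normalise for stability
    llrs = np.empty(codebook.shape[1])
    for i in range(codebook.shape[1]):
        p0 = metrics[codebook[:, i] == 0].sum()
        p1 = metrics[codebook[:, i] == 1].sum()
        llrs[i] = np.log(p0 / p1)                       # > 0 means bit i is more likely 0
    return llrs

# Example: transmit the all-zero codeword (all +1) through mild noise.
rng = np.random.default_rng(0)
y = np.ones(7) + rng.normal(scale=0.8, size=7)
print(app_llrs(y, noise_var=0.8 ** 2))
```

Any optimal symbol decoder, including the one-sweep algorithm of the paper, must produce these same LLR values; the brute-force version simply serves as a correctness reference for small codes.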
Terminated LDPC Convolutional Codes with Thresholds Close to Capacity
An ensemble of LDPC convolutional codes with parity-check matrices composed
of permutation matrices is considered. The convergence of the iterative belief
propagation based decoder for terminated convolutional codes in the ensemble is
analyzed for binary-input output-symmetric memoryless channels using density
evolution techniques. We observe that the structured irregularity in the Tanner
graph of the codes leads to significantly better thresholds when compared to
corresponding LDPC block codes.
Comment: To appear in the proceedings of the 2005 IEEE International Symposium on Information Theory, Adelaide, Australia, September 4-9, 2005.
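The density evolution analysis for terminated LDPC convolutional ensembles over general binary-input memoryless symmetric channels is involved; the sketch below shows the basic technique in its simplest setting, a regular (3,6) LDPC block ensemble on the binary erasure channel, where the recursion reduces to a one-dimensional fixed-point iteration and the belief-propagation threshold can be found by bisection. The ensemble, iteration count, and the 1e-6 success criterion are illustrative assumptions, not the paper's construction.

```python
def bec_density_evolution(eps, dv=3, dc=6, iters=5000, tol=1e-12):
    """Scalar density-evolution recursion for a regular (dv, dc) LDPC ensemble
    on the binary erasure channel with erasure probability eps.

    Returns the fixed point of the variable-to-check message erasure
    probability; a value near zero means BP decoding succeeds at this eps.
    """
    x = eps
    for _ in range(iters):
        x_new = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

def bp_threshold(dv=3, dc=6, lo=0.0, hi=1.0, steps=40):
    """Bisect for the largest erasure probability with a vanishing
    density-evolution fixed point (the BP threshold of the ensemble)."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if bec_density_evolution(mid, dv, dc) < 1e-6:
            lo = mid
        else:
            hi = mid
    return lo

# The regular (3,6) block ensemble has a BP threshold of roughly 0.429 on the
# BEC, noticeably below the Shannon limit of 0.5 for rate 1/2; the structured
# convolutional ensembles in the paper are shown to narrow this kind of gap.
print(bp_threshold())
```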
Active distances and cascaded convolutional codes
A family of active distances for convolutional codes is introduced, and lower bounds are derived for the ensemble of periodically time-varying convolutional codes.
Braided Convolutional Codes: A New Class of Turbo-Like Codes
We present a new class of iteratively decodable turbo-like codes, called braided convolutional codes. Constructions and encoding procedures for tightly and sparsely braided convolutional codes are introduced. Sparsely braided codes exhibit good convergence behavior with iterative decoding, and a statistical analysis using Markov permutors shows that the free distance of these codes grows linearly with constraint length, i.e., they are asymptotically good.
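The braided array construction itself is not sketched here; as background, the following shows a generic rate-1/2 recursive systematic convolutional (RSC) encoder of the kind used as a component code in turbo-like constructions. The octal generators (1, 5/7) are a common textbook choice and are only an assumption, not the component codes of the paper.

```python
def rsc_encode(info_bits):
    """Rate-1/2 recursive systematic convolutional encoder with octal
    generators (1, 5/7): feedback polynomial 1 + D + D^2, feedforward
    polynomial 1 + D^2.  Returns the systematic and parity sequences
    (no trellis termination)."""
    s1 = s2 = 0                 # shift-register contents a_{k-1}, a_{k-2}
    systematic, parity = [], []
    for u in info_bits:
        a = u ^ s1 ^ s2         # feedback: 1 + D + D^2
        p = a ^ s2              # feedforward: 1 + D^2
        systematic.append(u)
        parity.append(p)
        s2, s1 = s1, a          # shift the register
    return systematic, parity

# Example: encode a short block.  A braided (or turbo) construction would feed
# permuted copies of the information and parity bits into further component
# encoders arranged in an array, rather than using a single encoder as here.
sys_bits, par_bits = rsc_encode([1, 0, 1, 1, 0, 0, 1])
print(sys_bits, par_bits)
```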
Analysis and Design of Tuned Turbo Codes
It has been widely observed that there exists a fundamental trade-off between
the minimum (Hamming) distance properties and the iterative decoding
convergence behavior of turbo-like codes. While capacity-achieving code ensembles are typically asymptotically bad, in the sense that their minimum distance does not grow linearly with block length, and therefore exhibit an error floor at moderate-to-high signal-to-noise ratios, asymptotically good codes usually converge further away from channel capacity. In this paper, we
introduce the concept of tuned turbo codes, a family of asymptotically good
hybrid concatenated code ensembles, where asymptotic minimum distance growth
rates, convergence thresholds, and code rates can be traded off using two tuning parameters, λ and μ. Decreasing λ reduces the asymptotic minimum distance growth rate in exchange for improved iterative decoding convergence behavior, while increasing λ raises it at the expense of worse convergence behavior; the code performance can thus be tuned to fit the desired application. Decreasing μ achieves a similar tuning behavior for higher-rate code ensembles.
Comment: Accepted for publication in IEEE Transactions on Information Theory.